41 research outputs found

    AIC, Cp and estimators of loss for elliptically symmetric distributions

    In this article, we develop a modern perspective on Akaike's Information Criterion and Mallows' Cp for model selection. Despite the differences in their respective motivations, they are equivalent in the special case of Gaussian linear regression. In this case they are also equivalent to a third criterion, an unbiased estimator of the quadratic prediction loss, derived from loss estimation theory. Our first contribution is to provide an explicit link between loss estimation and model selection through a new oracle inequality. We then show that the form of the unbiased estimator of the quadratic prediction loss under a Gaussian assumption still holds under a more general distributional assumption, the family of spherically symmetric distributions. One of the features of our results is that our criterion does not rely on the specificity of the distribution, but only on its spherical symmetry. Also, this family of laws offers a dependence property between the observations, a case not often studied.
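
    The following is a minimal numerical sketch (not from the article) of the equivalence the abstract refers to: in Gaussian linear regression with known noise variance, Mallows' Cp and the unbiased estimator of the quadratic prediction loss are affinely related, and AIC typically selects the same model. The formulas below use standard textbook conventions, and the data, dimensions, and variable names are illustrative assumptions.

```python
# Sketch: compare AIC, Mallows' Cp and the unbiased quadratic prediction loss
# estimate (SURE) across nested Gaussian linear regression models.
# Standard textbook forms; constants and conventions may differ from the article.
import numpy as np

rng = np.random.default_rng(0)
n, sigma2 = 200, 1.0                          # sample size, known noise variance
X_full = rng.normal(size=(n, 10))
beta = np.r_[2.0, -1.5, 1.0, np.zeros(7)]     # only the first 3 predictors are active
y = X_full @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

def criteria(k):
    """Fit the model with the first k columns and return (AIC, Cp, SURE)."""
    Xk = X_full[:, :k]
    beta_hat, *_ = np.linalg.lstsq(Xk, y, rcond=None)
    rss = np.sum((y - Xk @ beta_hat) ** 2)
    aic = n * np.log(rss / n) + 2 * k             # Gaussian AIC, up to additive constants
    cp = rss / sigma2 + 2 * k - n                 # Mallows' Cp with known sigma2
    sure = rss + 2 * sigma2 * k - n * sigma2      # unbiased estimate of quadratic prediction loss
    return aic, cp, sure

for k in range(1, 11):
    print(k, *np.round(criteria(k), 2))
# SURE = sigma2 * Cp, so the two criteria rank models identically by construction;
# AIC usually points to the same minimizer in this setting.
```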

    On efficient prediction and predictive density estimation for spherically symmetric models

    Let X, U, Y be spherically symmetrically distributed with density η^(d+k/2) f(η(‖x − θ‖² + ‖u‖² + ‖y − cθ‖²)), with unknown parameters θ ∈ R^d and η > 0, and with known density f and constant c > 0. Based on observing X = x, U = u, we consider the problem of obtaining a predictive density q̂(y; x, u) for Y, as measured by the expected Kullback–Leibler loss. A benchmark procedure is the minimum risk equivariant density q̂_mre, which is generalized Bayes with respect to the prior π(θ, η) = η^(−1). For d ≥ 3, we obtain improvements on q̂_mre, and further show that the dominance holds simultaneously for all f subject to finite moment and finite risk conditions. We also obtain that the Bayes predictive density with respect to the harmonic prior π_h(θ, η) = η^(−1) ‖θ‖^(2−d) dominates q̂_mre simultaneously for all scale mixtures of normals f. The results hinge on duality with a point prediction problem, as well as on posterior representations for (θ, η), which are of interest in their own right. Namely, for d ≥ 3, we obtain point predictors δ(X, U) of Y that dominate the benchmark predictor cX simultaneously for all f, and simultaneously for the risk functions E_f[ρ(‖Y − δ(X, U)‖² + (1 + c²)‖U‖²)], with ρ increasing and concave on R_+, including the squared error case E_f[‖Y − δ(X, U)‖²].
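
    Below is a small Monte Carlo sketch, in the normal special case only, of the kind of dominance over the benchmark cX that the abstract describes. The James–Stein-type shrinkage form and the constant a = (d − 2)/(k + 2) are classical illustrative choices, not necessarily the predictor δ(X, U) constructed in the paper; the dimensions d, k and the constants c, σ are arbitrary assumptions.

```python
# Sketch: squared-error prediction risk of the benchmark cX versus an
# illustrative James-Stein-type shrinkage predictor, normal case.
import numpy as np

rng = np.random.default_rng(1)
d, k, c, sigma = 5, 4, 0.8, 1.3          # illustrative choices (need d >= 3)
theta = rng.normal(size=d)
n_rep = 200_000

X = theta + sigma * rng.normal(size=(n_rep, d))        # X ~ N_d(theta, sigma^2 I)
U = sigma * rng.normal(size=(n_rep, k))                # U ~ N_k(0, sigma^2 I), residual vector
Y = c * theta + sigma * rng.normal(size=(n_rep, d))    # Y ~ N_d(c * theta, sigma^2 I)

a = (d - 2) / (k + 2)                                  # classical James-Stein constant
shrink = 1.0 - a * np.sum(U**2, axis=1) / np.sum(X**2, axis=1)
delta = c * shrink[:, None] * X                        # shrinkage point predictor of Y
bench = c * X                                          # benchmark predictor

risk = lambda pred: np.mean(np.sum((Y - pred) ** 2, axis=1))
print("benchmark cX risk:", risk(bench))
print("shrinkage risk:   ", risk(delta))               # smaller than the benchmark for d >= 3
```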

    On Bayes and unbiased estimators of loss

    Keywords: Loss estimation, shrinkage estimation, Bayes estimation, unbiased estimation, superharmonicity